Face recognition technology is used in many fields owing to its high recognition accuracy, including face unlocking on mobile devices, community access control systems, and city surveillance. Since the current high accuracy is guaranteed by very deep network structures, face images usually need to be transmitted to third-party servers with high computational power for inference. However, face images visually reveal the user's identity information, and in this process both untrusted service providers and malicious users can significantly increase the risk of personal privacy breaches. Current privacy-preserving approaches usually come with many side effects, such as a significant increase in inference time or an obvious drop in recognition accuracy. This paper proposes a privacy-preserving face recognition method that uses differential privacy in the frequency domain. Because it exploits differential privacy, it provides a theoretical guarantee of privacy, while the loss of accuracy is very small. The method first converts the original image to the frequency domain and removes the direct component, referred to as DC. Then a privacy budget allocation method can be learned based on the loss of the back-end face recognition network within the differential privacy framework. Finally, the corresponding noise is added to the frequency-domain features. Extensive experiments show that our method performs very well on several classical face recognition test sets.
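As a rough illustration of the pipeline just described, the sketch below applies a block DCT, drops the DC coefficient, and perturbs the remaining frequency coefficients with Laplace noise scaled by a per-frequency privacy budget. The block size, fixed sensitivity, and function name are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch: block DCT -> drop DC -> add Laplace noise per frequency budget.
# Assumes an 8x8 block DCT and a learned budget vector `eps`; illustrative only.
import numpy as np
from scipy.fft import dctn

def dp_frequency_features(image, eps, sensitivity=1.0, block=8):
    """image: (H, W) grayscale array in [0, 1]; eps: (block*block - 1,) budgets."""
    h, w = image.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            coef = dctn(image[i:i + block, j:j + block], norm="ortho").ravel()
            coef = coef[1:]                      # drop the DC (direct) component
            noise = np.random.laplace(0.0, sensitivity / eps, size=coef.shape)
            feats.append(coef + noise)           # DP-perturbed frequency features
    return np.stack(feats)
```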
One of the key challenges in neural architecture search (NAS) is to efficiently rank the performance of architectures. The mainstream assessment of performance rankers uses ranking correlations (e.g., Kendall's tau), which pay equal attention to the whole space. However, the optimization goal of NAS is to identify the top architectures while paying less attention to the other architectures in the search space. In this paper, we show both empirically and theoretically that Normalized Discounted Cumulative Gain (NDCG) is a better metric for rankers. Subsequently, we propose a new algorithm, AceNAS, which directly optimizes NDCG with LambdaRank. It also leverages the weak labels produced by weight-sharing NAS to pre-train the ranker, further reducing the search cost. Extensive experiments on 12 NAS benchmarks and a large-scale search space demonstrate that our approach consistently outperforms SOTA NAS methods, with a 3.67% accuracy improvement and an 8x reduction in search cost.
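For concreteness, the following sketch computes NDCG@k for an architecture ranker from its predicted scores and the architectures' ground-truth accuracies. It illustrates the metric itself, not AceNAS's LambdaRank training loop; the gain and discount choices are standard assumptions.

```python
# NDCG@k for a performance ranker: heavy weight on getting the top of the list right.
import numpy as np

def ndcg_at_k(scores, accs, k=10):
    k = min(k, len(scores))
    order = np.argsort(scores)[::-1][:k]            # architectures ranked by the predictor
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = np.sum((2.0 ** accs[order] - 1.0) * discounts)
    ideal = np.sort(accs)[::-1][:k]                 # best possible ordering
    idcg = np.sum((2.0 ** ideal - 1.0) * discounts)
    return dcg / idcg if idcg > 0 else 0.0
```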
With the wide application of face recognition systems, there is rising concern that original face images could be exposed to malicious intents and consequently cause personal privacy breaches. This paper presents DuetFace, a novel privacy-preserving face recognition method that employs collaborative inference in the frequency domain. Starting from the counterintuitive finding that face recognition can achieve surprisingly good performance with only visually indistinguishable high-frequency channels, the method designs a credible split of frequency channels by their cruciality for visualization and operates the server-side model on the non-crucial channels. However, the model's attention to facial features degrades due to the missing visual information. To compensate, the method introduces a plug-in interactive block that transfers attention from the client side by producing a feature mask. The mask is further refined by deriving and overlaying a facial region of interest (ROI). Extensive experiments on multiple datasets validate the effectiveness of the proposed method in protecting face images from undesired visual inspection, reconstruction, and identification while maintaining high task availability and performance. Results show that the proposed method achieves recognition accuracy and computation cost comparable to the unprotected ArcFace and outperforms state-of-the-art privacy-preserving methods. The source code is available at https://github.com/tencent/tcace/tree/master/recognition/tasks/duetface.
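The channel-split idea can be sketched roughly as follows: decompose the image into block-DCT sub-bands ("channels"), keep a visually crucial low-frequency subset on the client, and forward only the non-crucial channels to the server-side model. The fixed index set and helper name are illustrative assumptions; DuetFace derives its split from visualization cruciality rather than a hard-coded list.

```python
# Sketch of splitting block-DCT sub-bands into client-side (crucial) and
# server-side (non-crucial) channels; the crucial index set is illustrative.
import numpy as np
from scipy.fft import dctn

def split_frequency_channels(image, block=8, crucial=(0, 1, 2, 8, 9, 16)):
    h, w = image.shape
    blocks = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blocks.append(dctn(image[i:i + block, j:j + block], norm="ortho").ravel())
    channels = np.stack(blocks, axis=0).T            # (64, num_blocks): one row per sub-band
    mask = np.zeros(block * block, dtype=bool)
    mask[np.asarray(crucial)] = True
    client_part = channels[mask]                      # visually crucial channels stay local
    server_part = channels[~mask]                     # non-crucial channels sent for inference
    return client_part, server_part
```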
The vulnerability of machine learning models to spurious correlations has mostly been discussed in the context of supervised learning (SL). However, there is a lack of insight into how spurious correlations affect the performance of popular self-supervised learning (SSL) and auto-encoder based (AE) models. In this work, we shed light on this by evaluating the performance of these models on both real-world and synthetic distribution-shift datasets. Following the observation that the linear head itself can be susceptible to spurious correlations, we develop a novel evaluation scheme with a linear head trained on out-of-distribution (OOD) data, to isolate the performance of the pre-trained model from a potential bias of the linear head used for evaluation. With this new methodology, we show that SSL models are consistently more robust than AE and SL models, and are therefore better at OOD generalization.
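A minimal sketch of this evaluation scheme, assuming features have already been extracted with the frozen pre-trained encoder: the linear head is fit only on OOD data, so its own bias is separated from the quality of the representation being evaluated. The names and the choice of logistic regression are assumptions made for illustration.

```python
# Linear probe fit on OOD features only, so test accuracy reflects the encoder
# rather than a head that may have absorbed the spurious correlation.
from sklearn.linear_model import LogisticRegression

def ood_linear_probe(feats_ood_train, labels_ood_train, feats_test, labels_test):
    head = LogisticRegression(max_iter=1000)
    head.fit(feats_ood_train, labels_ood_train)    # head trained on OOD data only
    return head.score(feats_test, labels_test)     # accuracy attributable to the encoder
```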
We propose ADIOS, a masked image modeling (MIM) framework for self-supervised learning that simultaneously learns a masking function and an image encoder using an adversarial objective. The image encoder is trained to minimize the distance between the representation of the original image and that of a masked image. The masking function, conversely, aims to maximize this distance. ADIOS consistently improves on state-of-the-art self-supervised learning (SSL) methods across a variety of tasks and datasets -- including classification on ImageNet100 and STL10, transfer learning on CIFAR10/100, Flowers102 and iNaturalist, and robustness evaluated on the backgrounds challenge (Xiao et al., 2021) -- while producing semantically meaningful masks. Unlike modern MIM models such as MAE, BEiT and iBOT, ADIOS does not rely on the image-patch tokenisation of Vision Transformers and can be implemented with convolutional backbones. We further demonstrate that the masks learned by ADIOS are more effective at improving the representations of SSL methods than the masking schemes used in popular MIM models.
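The adversarial objective can be sketched as the following alternating update (PyTorch), in which the encoder minimizes a representation distance between the original and masked image and the masking network maximizes the same distance. The cosine distance and the single-mask setup are simplifying assumptions; ADIOS itself uses multiple masking slots and additional SSL machinery.

```python
# One alternating min-max step: encoder minimizes the distance, masker maximizes it.
import torch
import torch.nn.functional as F

def adios_step(encoder, masker, images, opt_enc, opt_mask):
    # Encoder update: minimize distance with the masker frozen.
    mask = masker(images).detach()                          # mask values assumed in [0, 1]
    dist = 1.0 - F.cosine_similarity(
        encoder(images), encoder(images * (1.0 - mask)), dim=-1).mean()
    opt_enc.zero_grad(); dist.backward(); opt_enc.step()

    # Masker update: maximize the same distance (minimize its negation).
    mask = masker(images)
    with torch.no_grad():
        z_full = encoder(images)
    adv_loss = -(1.0 - F.cosine_similarity(
        z_full, encoder(images * (1.0 - mask)), dim=-1).mean())
    opt_mask.zero_grad(); adv_loss.backward(); opt_mask.step()
    return dist.item()
```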
Multimodal VAEs seek to model the joint distribution over heterogeneous data (e.g.\ vision, language), whilst also capturing a shared representation across such modalities. Prior work has typically combined information from the modalities by reconciling idiosyncratic representations directly in the recognition model through explicit products, mixtures, or other such factorisations. Here we introduce a novel alternative, the MEME, that avoids such explicit combinations by repurposing semi-supervised VAEs to combine information between modalities implicitly through mutual supervision. This formulation naturally allows learning from partially-observed data where some modalities can be entirely missing -- something that most existing approaches either cannot handle, or do so to a limited extent. We demonstrate that MEME outperforms baselines on standard metrics across both partial and complete observation schemes on the MNIST-SVHN (image-image) and CUB (image-text) datasets. We also contrast the quality of the representations learnt by mutual supervision against standard approaches and observe interesting trends in its ability to capture relatedness between data.
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications benefit only marginally, if at all, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, namely by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
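A simple instance of token-relation distillation, as opposed to CLS-token or raw-feature distillation, is to match the student's token-token similarity distribution to the teacher's. The sketch below (PyTorch) assumes teacher and student token features of shape (B, N, D) taken from an intermediate teacher layer; it illustrates the idea rather than TinyMIM's exact relation targets.

```python
# Distill token-token relations: KL between student and teacher similarity distributions.
import torch
import torch.nn.functional as F

def token_relation_distill_loss(s_tokens, t_tokens, tau=1.0):
    s = F.normalize(s_tokens, dim=-1)                            # (B, N, D)
    t = F.normalize(t_tokens, dim=-1)
    s_rel = F.log_softmax(s @ s.transpose(1, 2) / tau, dim=-1)   # student token relations
    t_rel = F.softmax(t @ t.transpose(1, 2) / tau, dim=-1)       # teacher token relations
    return F.kl_div(s_rel, t_rel, reduction="batchmean")
```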
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT has strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
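The NAIVEATTACK variant can be sketched very simply: stamp a small trigger patch onto a fraction of the raw training images and relabel them to the target class before distillation begins, then run the distillation algorithm unchanged. The patch size, position, and poison rate below are illustrative assumptions.

```python
# Poison the raw data before distillation: white corner patch + relabel to target class.
import numpy as np

def naive_attack(images, labels, target_class, poison_rate=0.01, patch=3):
    images, labels = images.copy(), labels.copy()           # images: (N, H, W, C) in [0, 1]
    n_poison = max(1, int(poison_rate * len(images)))
    idx = np.random.choice(len(images), n_poison, replace=False)
    images[idx, -patch:, -patch:, :] = 1.0                   # trigger: white square in the corner
    labels[idx] = target_class                               # point the trigger at the target class
    return images, labels                                    # then run dataset distillation as usual
```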
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation in image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem for BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem in line with the easy-to-hard progression of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets. The experimental results indicate that the performance of PMT-IQA is superior to the comparison approaches, and that both the MS and PMT modules improve the model's performance.
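One way to read the progressive multi-task idea is as a loss whose weighting shifts from an easier auxiliary task to the harder quality-score regression as training proceeds. The sketch below (PyTorch) assumes a quality-level classification head as the easy task and a linear warm-up schedule; both are illustrative assumptions rather than the exact PMT-IQA design.

```python
# Progressive multi-task loss: emphasis moves from the easy task to the hard one over training.
import torch
import torch.nn.functional as F

def pmt_loss(cls_logits, cls_targets, reg_pred, reg_targets, epoch, total_epochs):
    w = min(1.0, epoch / (0.5 * total_epochs))       # weight ramps from 0 to 1 by mid-training
    loss_cls = F.cross_entropy(cls_logits, cls_targets)   # easy task: coarse quality level
    loss_reg = F.l1_loss(reg_pred, reg_targets)            # hard task: exact quality score
    return (1.0 - w) * loss_cls + w * loss_reg
```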